
Adding magics functionality to the litellm implementation #1437


Merged

16 commits merged on Aug 12, 2025

Conversation


@srdas srdas commented Jul 29, 2025

Refactored various features of the %ai and %%ai magics to use the new litellm backbone added in PR #1426. Fixes issue #1432.

  1. Updated the run_ai_cell function to invoke litellm instead of LangChain.
  2. Renamed the register command to alias, which is more natural when registering an alias for a model_id.
  3. Renamed the delete command to dealias for the same reason.
  4. Kept the update command, as it works well and is convenient.
  5. Checked that the help output shows the updated commands.
  6. Refactored the list command to show model IDs and aliases.
  7. Checked that the reset command is working.
  8. Renamed default_language_model to initial_language_model.
  9. Renamed aliases to initial_aliases.
  10. Renamed the error command to fix and updated its handling.
  11. Confirmed that this works with the new litellm API key handling using the .env approach.
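
Step 1 above can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: the helper `build_messages` and the parameter names `model_id` and `transcript` are assumptions, while `litellm.completion()` is the library's standard OpenAI-style entry point.

```python
def build_messages(transcript, prompt):
    """Combine prior exchanges with the new user prompt.

    Hypothetical helper, not necessarily the PR's structure.
    """
    return transcript + [{"role": "user", "content": prompt}]


def run_ai_cell(model_id, prompt, transcript):
    # litellm accepts a provider-prefixed model string,
    # e.g. "openai/gpt-4o-mini", and an OpenAI-style messages list.
    import litellm

    response = litellm.completion(
        model=model_id,
        messages=build_messages(transcript, prompt),
    )
    return response.choices[0].message.content
```

The key point is that litellm reads provider API keys from the environment (hence the .env approach in item 11), so no per-provider client objects are needed.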

See the demo below showing all of it working:

Screen.Recording.2025-07-30.at.2.30.01.PM.mov

Nice to have (among other items):

  1. Make the format options much more robust by using agentic capabilities with better prompts.
  2. Explore whether IPython widgets can be added as another output format option.
  3. Update the markdown format to handle tables.
  4. See the other TODO comments in magics.py for consideration.

@srdas srdas added the enhancement New feature or request label Jul 29, 2025
Member

@dlqqq dlqqq left a comment


@srdas This looks great, thank you! Just a few comments. I haven't tested it yet; I have to join standup now.

@srdas srdas marked this pull request as ready for review August 12, 2025 00:32
Member

@dlqqq dlqqq left a comment


@srdas This is a great start, thank you! Just one extra comment below.

In addition, I've opened some follow-up issues (titled [magics] ...) for you to explore. They can be addressed in separate PRs.

This is already a fantastic start, and I'm excited to merge this once the last comment is addressed.

Comment on lines +369 to +370
self.transcript.append({"role": "user", "content": prompt})
self.transcript.append({"role": "assistant", "content": output})
Member


Can you bound this list to be of length self.max_history * 2? Right now, the ai() method is manually truncating self.transcript every time it appends it to the chat history. It would be more reliable to add some logic to bound the length of self.transcript in the _append_exchange() method itself.
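
One way to satisfy this suggestion is to back the transcript with a bounded deque, so `_append_exchange()` enforces the limit itself. A minimal sketch, assuming a deque is acceptable here (hypothetical; the PR may bound a plain list instead):

```python
from collections import deque


class AiMagics:
    """Sketch of the reviewer's suggestion: bound the transcript inside
    _append_exchange() so ai() no longer truncates it manually."""

    def __init__(self, max_history: int = 2):
        self.max_history = max_history
        # Each exchange adds two messages (user + assistant), so cap the
        # deque at max_history * 2; old entries are dropped automatically.
        self.transcript = deque(maxlen=max_history * 2)

    def _append_exchange(self, prompt: str, output: str) -> None:
        self.transcript.append({"role": "user", "content": prompt})
        self.transcript.append({"role": "assistant", "content": output})
```

With `maxlen` set, appending beyond the bound silently evicts the oldest messages, which is exactly the "bound the length in `_append_exchange()` itself" behavior requested.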

Collaborator Author


Yes, this is now done. I tested everything to make sure it is working. Here is a notebook if you want to test all the magics features (unzip to use):
magics_litellm.ipynb.zip

@srdas srdas merged commit 4aa7ce4 into jupyterlab:litellm Aug 12, 2025
4 of 8 checks passed